A dimension-reducing method for unconstrained optimization
Authors
Abstract
Similar resources
A dimension-reducing method for unconstrained optimization
A new method for unconstrained optimization in ℝⁿ is presented. This method reduces the dimension of the problem in such a way that it can lead to an iterative approximate formula for the computation of (n - 1) components of the optimum, while its remaining component is computed separately using the final approximations of the other components. It converges quadratically to a local optimum and it ...
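Below is a minimal numerical sketch of the dimension-reducing idea described above, not the authors' iterative formula: the last component x_n is eliminated by solving the one-dimensional stationarity condition df/dx_n = 0 with a root finder, and a Newton-type iteration is then applied to the remaining n - 1 gradient components. The Rosenbrock test function, the root-finding bracket, the finite-difference derivatives and all tolerances are assumptions made only for illustration.

import numpy as np
from scipy.optimize import brentq


def grad(f, x, h=1e-6):
    """Central-difference gradient of f at x."""
    g = np.zeros_like(x)
    for i in range(len(x)):
        e = np.zeros_like(x)
        e[i] = h
        g[i] = (f(x + e) - f(x - e)) / (2.0 * h)
    return g


def eliminate_last(f, y, bracket=(-10.0, 10.0)):
    """Solve df/dx_n = 0 for the last component x_n, given the other components y."""
    def dfdxn(t):
        return grad(f, np.append(y, t))[-1]
    return brentq(dfdxn, *bracket)       # assumed bracket containing the root


def reduced_newton(f, y0, iters=20, h=1e-5):
    """Newton iteration on the reduced gradient G(y) = grad_{1..n-1} f(y, x_n(y))."""
    y = np.asarray(y0, dtype=float)

    def G(yy):
        return grad(f, np.append(yy, eliminate_last(f, yy)))[:-1]

    for _ in range(iters):
        g = G(y)
        J = np.zeros((len(y), len(y)))   # finite-difference Jacobian of G
        for j in range(len(y)):
            e = np.zeros_like(y)
            e[j] = h
            J[:, j] = (G(y + e) - G(y - e)) / (2.0 * h)
        step = np.linalg.solve(J, -g)
        y = y + step
        if np.linalg.norm(step) < 1e-10:
            break
    return np.append(y, eliminate_last(f, y))


if __name__ == "__main__":
    # Rosenbrock function in two variables as an assumed test problem.
    rosen = lambda x: (1 - x[0]) ** 2 + 100 * (x[1] - x[0] ** 2) ** 2
    print(reduced_newton(rosen, [0.5]))  # expected to approach [1, 1]

On this toy problem the elimination step simply enforces x_2 = x_1^2, after which the reduced Newton iteration acts on a single variable.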
A Dimension-reducing Conic Method for Unconstrained Optimization
In this paper we present a new algorithm for finding the unconstrained minimum of a twice-continuously differentiable function f(x) in n variables. This algorithm is based on a conic model function, which does not involve the conjugacy matrix or the Hessian of the model function. The basic idea in this paper is to accelerate the convergence of the conic method by choosing more appropriate points x...
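For reference, the sketch below evaluates a Davidon-type conic model function, the kind of model such conic methods are built on; the gauge vector b, the matrix A and the evaluation point are hypothetical values chosen only for illustration, not quantities from the cited paper. Setting b = 0 recovers the ordinary quadratic model.

import numpy as np


def conic_model(s, f0, g, A, b):
    """m(s) = f0 + g.s / (1 - b.s) + 0.5 * s.A.s / (1 - b.s)**2"""
    gamma = 1.0 - b @ s              # the gauge factor; must remain positive
    return f0 + (g @ s) / gamma + 0.5 * (s @ A @ s) / gamma ** 2


if __name__ == "__main__":
    f0 = 3.0
    g = np.array([1.0, -2.0])        # gradient at the current iterate
    A = np.array([[2.0, 0.0], [0.0, 4.0]])
    b = np.array([0.2, -0.1])        # gauge (horizon) vector
    s = np.array([0.3, 0.1])         # trial step
    print("conic model    :", conic_model(s, f0, g, A, b))
    print("quadratic model:", conic_model(s, f0, g, A, np.zeros(2)))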
Dimension Reducing Methods for Systems of Nonlinear Equations and Unconstrained Optimization: a Review
The purpose of this report is to review a new class of methods we have proposed for solving systems of nonlinear equations and optimization problems, named Dimension Reducing Methods. These methods are based on reduction to simpler one-dimensional nonlinear equations. Although these methods use reduction to a simpler one-dimensional problem, they converge quadratically and incorporate the advantages of nonlinear SO...
A Trust-region Method using Extended Nonmonotone Technique for Unconstrained Optimization
In this paper, we present a nonmonotone trust-region algorithm for unconstrained optimization. We first introduce a variant of the nonmonotone strategy proposed by Ahookhosh and Amini [AhA 01] and incorporate it into the trust-region framework to construct a more efficient approach. Our new nonmonotone strategy combines the current function value with the maximum function values in some pri...
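A hedged sketch of such a nonmonotone trust-region loop is given below: the acceptance ratio compares the predicted model reduction against a reference value that mixes the current function value with the maximum of a few recent values. The mixing weight eta, the memory length, the Cauchy-point step and the quadratic test problem are assumptions for illustration, not the algorithm of the cited paper.

import numpy as np


def cauchy_point(g, B, delta):
    """Cauchy step: minimizer of the quadratic model along -g inside the trust region."""
    gBg = g @ B @ g
    tau = 1.0 if gBg <= 0 else min(1.0, np.linalg.norm(g) ** 3 / (delta * gBg))
    return -tau * delta / np.linalg.norm(g) * g


def nonmonotone_tr(f, grad, hess, x0, delta0=1.0, eta=0.7, memory=5, iters=100):
    x, delta = np.asarray(x0, float), delta0
    history = [f(x)]                       # recent function values
    for _ in range(iters):
        g, B = grad(x), hess(x)
        if np.linalg.norm(g) < 1e-8:
            break
        p = cauchy_point(g, B, delta)
        pred = -(g @ p + 0.5 * p @ B @ p)  # predicted reduction of the model
        f_max = max(history[-memory:])
        ref = eta * f_max + (1 - eta) * f(x)   # nonmonotone reference value
        rho = (ref - f(x + p)) / pred if pred > 0 else -1.0
        if rho >= 0.1:                     # accept the trial step
            x = x + p
            history.append(f(x))
        delta = 2 * delta if rho > 0.75 else (0.5 * delta if rho < 0.25 else delta)
    return x


if __name__ == "__main__":
    quad = lambda x: 0.5 * x @ np.diag([1.0, 10.0]) @ x   # assumed test problem
    g = lambda x: np.diag([1.0, 10.0]) @ x
    H = lambda x: np.diag([1.0, 10.0])
    print(nonmonotone_tr(quad, g, H, [3.0, -2.0]))        # should approach the origin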
A Free Line Search Steepest Descent Method for Solving Unconstrained Optimization Problems
In this paper, we solve unconstrained optimization problems using a free line search steepest descent method. First, we propose a double-parameter scaled quasi-Newton formula for calculating an approximation of the Hessian matrix. The approximation obtained from this formula is a positive definite matrix that satisfies the standard secant relation. We also show that the largest eigenvalue...
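The short check below illustrates the kind of property claimed here, namely a scaled quasi-Newton approximation that is positive definite and satisfies the standard secant relation B s = y. The one-parameter formula used (a memoryless BFGS-style update of theta*I) is only a stand-in, since the paper's double-parameter scaled formula is not reproduced here, and the test data are invented.

import numpy as np


def scaled_secant_approx(s, y, theta=1.0):
    """B = theta*(I - s s^T / s^T s) + y y^T / (y^T s); satisfies B s = y and is
    positive definite whenever theta > 0 and s^T y > 0."""
    n = len(s)
    P = np.eye(n) - np.outer(s, s) / (s @ s)   # projector onto the s-orthogonal space
    return theta * P + np.outer(y, y) / (y @ s)


if __name__ == "__main__":
    H = np.diag([1.0, 2.0, 5.0, 10.0])         # assumed true Hessian of a quadratic
    rng = np.random.default_rng(0)
    s = rng.standard_normal(4)                 # a step
    y = H @ s                                  # gradient difference, so s @ y > 0
    B = scaled_secant_approx(s, y, theta=2.0)
    print("secant residual ||B s - y|| =", np.linalg.norm(B @ s - y))   # ~1e-15
    print("eigenvalues of B:", np.linalg.eigvalsh(B))                   # all positive

Positive definiteness holds whenever theta > 0 and s·y > 0, which is the same curvature condition standard quasi-Newton updates require.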
Journal
Journal title: Journal of Computational and Applied Mathematics
Year: 1996
ISSN: 0377-0427
DOI: 10.1016/0377-0427(95)00174-3